A Bayesian approach to constrained single- and multi-objective optimization
This article addresses the problem of derivative-free (single- or
multi-objective) optimization subject to multiple inequality constraints. Both
the objective and constraint functions are assumed to be smooth, non-linear and
expensive to evaluate. As a consequence, the number of evaluations that can be
used to carry out the optimization is very limited, as in complex industrial
design optimization problems. The method we propose to overcome this difficulty
has its roots in both the Bayesian and the multi-objective optimization
literatures. More specifically, an extended domination rule is used to handle
objectives and constraints in a unified way, and a corresponding expected
hyper-volume improvement sampling criterion is proposed. This new criterion is
naturally adapted to the search of a feasible point when none is available, and
reduces to existing Bayesian sampling criteria---the classical Expected
Improvement (EI) criterion and some of its constrained/multi-objective
extensions---as soon as at least one feasible point is available. The
calculation and optimization of the criterion are performed using Sequential
Monte Carlo techniques. In particular, an algorithm similar to the subset
simulation method, which is well known in the field of structural reliability,
is used to estimate the criterion. The method, which we call BMOO (for Bayesian
Multi-Objective Optimization), is compared to state-of-the-art algorithms for
single- and multi-objective constrained optimization.
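The extended domination rule mentioned above can be sketched in a few lines. This is an illustrative reconstruction, not code from the paper: the function names are our own, and we assume the convention that a constraint value c(x) <= 0 means the constraint is satisfied. Feasible points are compared on their objectives; infeasible points are compared on their constraint violations; any feasible point dominates any infeasible one.

```python
import numpy as np

def extended_values(obj, cons):
    """Map (objectives, constraints) into the extended space used by the
    extended domination rule: a feasible point keeps its objective values
    (with zero violation); an infeasible point is represented by its
    constraint violations only (objectives set to +inf).
    Convention: cons <= 0 componentwise means feasible."""
    violation = np.maximum(cons, 0.0)
    if np.all(cons <= 0.0):  # feasible point
        return np.concatenate([obj, np.zeros_like(cons)])
    return np.concatenate([np.full_like(obj, np.inf), violation])

def dominates(a, b):
    """Pareto domination in the extended space: a dominates b if it is
    no worse in every coordinate and strictly better in at least one."""
    ya, yb = extended_values(*a), extended_values(*b)
    return bool(np.all(ya <= yb) and np.any(ya < yb))
```

With this mapping, ordinary Pareto domination in the extended space reproduces the behavior described in the abstract: the search is driven toward feasibility first, and reduces to standard multi-objective comparison once feasible points exist.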
A922 Sequential measurement of 1 hour creatinine clearance (1-CRCL) in critically ill patients at risk of acute kidney injury (AKI)
Meeting abstract.
A Bayesian approach to constrained multi-objective optimization
In this thesis, we address the problem of the derivative-free multi-objective optimization of real-valued functions subject to multiple inequality constraints. In particular, we consider a setting where the objectives and constraints of the problem are evaluated simultaneously using a potentially time-consuming computer program. To solve this problem, we propose a Bayesian optimization algorithm called BMOO. This algorithm implements a new expected improvement sampling criterion crafted to apply to potentially heavily constrained problems and to many-objective problems. This criterion stems from the use of the hypervolume of the dominated region as a loss function, where the dominated region is defined using an extended domination rule that applies jointly to the objectives and constraints. Several criteria from the Bayesian optimization literature are recovered as special cases. The criterion takes the form of an integral over the space of objectives and constraints for which no closed-form expression exists in the general case. Moreover, it has to be optimized at every iteration of the algorithm. To address these difficulties, dedicated sequential Monte Carlo algorithms are also proposed. The effectiveness of BMOO is demonstrated on academic test problems and on four real-life design optimization problems.
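Among the classical criteria recovered as special cases is the Expected Improvement, which does admit a closed form in the single-objective, unconstrained case. As a reference point, here is that standard formula (our own minimal implementation, for minimization under a Gaussian predictive distribution):

```python
import math

def expected_improvement(mu, sigma, best):
    """Closed-form Expected Improvement for minimization, assuming the
    surrogate predicts N(mu, sigma^2) at the candidate point; `best` is
    the current minimum of the observed objective values."""
    if sigma <= 0.0:  # degenerate prediction: improvement is deterministic
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    return (best - mu) * cdf + sigma * pdf
```

The constrained and multi-objective extensions discussed in the thesis generalize this quantity; unlike it, they have no closed form in general, which is what motivates the sequential Monte Carlo machinery.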
A Bayesian subset simulation approach to constrained global optimization of expensive-to-evaluate black-box functions
This talk addresses the problem of derivative-free global optimization of a real-valued function under multiple inequality constraints. Both the objective function and the constraint functions are assumed to be smooth, nonlinear, expensive-to-evaluate black-box functions. As a consequence, the number of evaluations that can be used to carry out the optimization is very limited. In this work, we focus on the case of strongly constrained problems, where finding a feasible design using such a limited budget of simulations is a challenge in itself. The method that we propose to overcome this difficulty has its roots in the recent literature on Gaussian process-based methods for reliability analysis, in particular the Bayesian Subset Simulation (BSS) algorithm of Li, Bect and Vazquez, and in multi-objective optimization. More specifically, we consider a decreasing sequence of nested subsets of the design space, which is defined and explored sequentially using a combination of Sequential Monte Carlo (SMC) techniques and sequential Bayesian design of experiments. The proposed method obtains promising results on challenging test cases from the literature.
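The idea of a decreasing sequence of nested subsets, each explored by conditional sampling, is the core of plain subset simulation. The following toy sketch (our own, not the BSS algorithm itself, which adds Gaussian process surrogates on top) estimates a small probability P[g(X) <= 0] by introducing intermediate thresholds that each retain roughly a fraction p0 of the current samples, with a crude one-step Metropolis move to renew the conditional population:

```python
import numpy as np

rng = np.random.default_rng(0)

def subset_simulation(g, logpdf, sample_prior, n=2000, p0=0.1, t_final=0.0):
    """Toy subset-simulation estimate of P[g(X) <= t_final] under the prior
    whose log-density is `logpdf`.  Each level keeps the p0 fraction of
    samples with the smallest g-values and multiplies the estimate by the
    observed acceptance fraction."""
    x = sample_prior(n)
    prob = 1.0
    for _ in range(20):                      # safety cap on the number of levels
        vals = g(x)
        t = np.quantile(vals, p0)            # next intermediate threshold
        if t <= t_final:                     # final level reached
            return prob * float(np.mean(vals <= t_final))
        prob *= float(np.mean(vals <= t))
        seeds = x[vals <= t]                 # samples conditional on g <= t
        idx = rng.integers(0, len(seeds), size=n)
        cur = seeds[idx]                     # resample seeds with replacement
        prop = cur + 0.5 * rng.standard_normal(cur.shape)
        # Metropolis step w.r.t. the prior, restricted to the subset {g <= t}
        accept = (np.log(rng.random(n)) < logpdf(prop) - logpdf(cur)) \
                 & (g(prop) <= t)
        x = np.where(accept[:, None], prop, cur)
    return prob
```

For instance, with a standard Gaussian prior and g(x) = 3 - x, the routine estimates P[X >= 3] (about 1.35e-3) from a few thousand samples per level, whereas a naive Monte Carlo estimate of that probability would need far more evaluations; this is the mechanism the talk adapts to reach the feasible region of strongly constrained problems.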
A Bayesian approach to constrained multi-objective optimization
In this thesis, we address the problem of the derivative-free multi-objective optimization of real-valued functions subject to multiple inequality constraints. In particular, we consider a setting where the objectives and constraints of the problem are evaluated simultaneously using a potentially time-consuming computer program. To solve this problem, we propose a Bayesian optimization algorithm called BMOO. This algorithm implements a new expected improvement sampling criterion crafted to apply to potentially heavily constrained problems and to many-objective problems. This criterion stems from the use of the hypervolume of the dominated region as a loss function, where the dominated region is defined using an extended domination rule that applies jointly to the objectives and constraints. Several criteria from the Bayesian optimization literature are recovered as special cases. The criterion takes the form of an integral over the space of objectives and constraints for which no closed-form expression exists in the general case. Moreover, it has to be optimized at every iteration of the algorithm. To address these difficulties, dedicated sequential Monte Carlo algorithms are also proposed. The effectiveness of BMOO is demonstrated on academic test problems and on four real-life design optimization problems.
BMOO: a Bayesian Multi-Objective Optimization algorithm
National audienceWe address the problem of derivative-free multi-objective optimization of real-valued functions subject to multiple inequality constraints
A Bayesian approach to constrained multi-objective optimization
This communication addresses the problem of derivative-free multi-objective optimization of real-valued functions subject to multiple inequality constraints, in a Bayesian framework. Both the objective and constraint functions are assumed to be smooth, non-linear and expensive to evaluate. As a consequence, the number of evaluations available to carry out the optimization is very limited. This set-up typically applies to complex industrial design optimization problems. The method we propose to overcome this difficulty has its roots in both the Bayesian and the multi-objective optimization literatures. More specifically, an extended domination rule is used to handle the constraints, and a corresponding expected hyper-volume improvement criterion is proposed. The calculation of this class of criteria is known to become difficult as the number of objectives increases. To address this difficulty, we propose a novel approach based on Sequential Monte Carlo techniques, which are also used to optimize the new criterion. The performance of the proposed method is evaluated on a set of test problems from the literature and compared with reference methods.
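To make the expected hyper-volume improvement concrete, here is a deliberately simple two-objective sketch (our own illustration: plain Monte Carlo with independent Gaussian marginals, whereas the work above uses Sequential Monte Carlo precisely because this naive approach scales poorly with the number of objectives):

```python
import numpy as np

rng = np.random.default_rng(1)

def pareto_filter(pts):
    """Keep the non-dominated points (both objectives minimized)."""
    return [p for p in pts
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in pts)]

def hypervolume2d(front, ref):
    """Area dominated by a non-dominated `front` up to the reference
    point `ref` (minimization): sort by the first objective and sum
    the rectangular slabs."""
    pts = sorted(front)                      # f1 ascending, hence f2 descending
    hv = 0.0
    for (x1, y1), nxt in zip(pts, pts[1:] + [ref]):
        hv += (nxt[0] - x1) * (ref[1] - y1)
    return hv

def mc_ehvi(front, ref, mean, std, n=4000):
    """Crude Monte Carlo estimate of the expected hypervolume improvement
    of a candidate whose two objective values are modeled as independent
    Gaussians under the surrogate posterior."""
    base = hypervolume2d(front, ref)
    samples = np.minimum(mean + std * rng.standard_normal((n, 2)), ref)
    imps = [hypervolume2d(pareto_filter(front + [tuple(s)]), ref) - base
            for s in samples]
    return float(np.mean(imps))
```

For a front {(0, 1), (1, 0)} with reference point (2, 2), the dominated area is 3, and a candidate predicted near (0.5, 0.5) has an expected improvement of about 0.25. Each estimate requires thousands of hypervolume evaluations, which illustrates why the criterion's computation becomes the bottleneck that the SMC machinery is designed to remove.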